    AN ALGORITHM FOR RECONSTRUCTING THREE-DIMENSIONAL IMAGES FROM OVERLAPPING TWO-DIMENSIONAL INTENSITY MEASUREMENTS WITH RELAXED CAMERA POSITIONING REQUIREMENTS, WITH APPLICATION TO ADDITIVE MANUFACTURING

    Get PDF
    Cameras are everywhere for security purposes, and there are often many cameras installed close to each other to cover areas of interest, such as airport passenger terminals. These systems are often designed to have overlapping fields of view to provide different aspects of the scene to review when, for example, law enforcement issues arise. However, these cameras are rarely, if ever, positioned in a way that would be conducive to conventional stereo image processing. To address this issue, an algorithm was developed to rectify images measured under such conditions and then perform stereo image reconstruction. The initial experiments described here were set up using two scientific cameras to capture overlapping images in various camera positions. The results showed that the algorithm accurately reconstructed the three-dimensional (3-D) surface locations of the input objects. During the research, an opportunity arose to further develop and test the algorithms for the problem of monitoring the fabrication process inside a 3-D printer. The geometry of 3-D printers prevents cameras from being placed in the conventional stereo imaging geometry, making the algorithms described above an attractive solution to this problem. The emphasis in 3-D printing on using extremely low-cost components and open-source software, and the need to develop a means of comparing observed progress in the fabrication process to a model of the device being fabricated, posed additional development challenges. Inside the 3-D printer, the algorithm was applied using two scientific cameras to detect errors during printing on the low-cost, open-source RepRap-style 3-D printer developed by Michigan Tech's Open Sustainability Technology Lab. An algorithm to detect errors in the shape of a device being fabricated using only one camera was also developed. The results show that a 3-D reconstruction algorithm can be used to accurately detect 3-D printing errors. The initial development of the algorithm was in MATLAB. Because the cost of MATLAB might prevent it from being used by open-source communities, the algorithm was ported to Python and made open source for everyone to use and customize. To further reduce cost, commonly available, inexpensive webcams were used instead of the expensive scientific cameras. To detect errors all the way around the printed part, six webcams were used, arranged as three pairs with the pairs spaced 120 degrees apart. The results indicated that the algorithms precisely detect 3-D printing errors around the printed part in both shape and size. With this low-cost, open-source approach, the algorithms are ready for a wide range of uses and applications.
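
    The reconstruction pipeline summarized above can be illustrated with a short Python/OpenCV sketch: two overlapping views from loosely positioned cameras are rectified so that epipolar lines become horizontal, a dense disparity map is computed, and the disparities are reprojected to 3-D surface points. This is a minimal sketch under assumed inputs; the file names, calibration values, and matcher settings are placeholders, not the thesis's actual code.

        # Minimal sketch of a two-view reconstruction pipeline (illustrative only).
        # The cameras are assumed to be calibrated already; the file names and all
        # parameter values below are placeholders, not values from the thesis.
        import cv2
        import numpy as np

        # Load an overlapping pair captured from two arbitrarily positioned cameras.
        left = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right_view.png", cv2.IMREAD_GRAYSCALE)

        # Intrinsics/extrinsics would normally come from cv2.stereoCalibrate();
        # identity matrices and a small translation stand in for them here.
        K1 = K2 = np.eye(3)
        d1 = d2 = np.zeros(5)
        R, T = np.eye(3), np.array([[0.1], [0.0], [0.0]])

        # Rectify so epipolar lines become horizontal, which is what lets cameras
        # in non-ideal positions be treated as a conventional stereo pair.
        size = left.shape[::-1]
        R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
        map_l = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
        map_r = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
        left_r = cv2.remap(left, map_l[0], map_l[1], cv2.INTER_LINEAR)
        right_r = cv2.remap(right, map_r[0], map_r[1], cv2.INTER_LINEAR)

        # Dense disparity, then reprojection to 3-D surface points.
        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
        disparity = matcher.compute(left_r, right_r).astype(np.float32) / 16.0
        points_3d = cv2.reprojectImageTo3D(disparity, Q)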

    Open source disease analysis system of cactus by artificial intelligence and image processing

    No full text
    There is growing interest in cactus cultivation because of the many uses of cacti, from houseplants to food and medicinal applications. Various diseases impact the growth of cacti, so an automated model for the analysis of cactus disease was developed so that damage to the cactus can be quickly treated and prevented. The Faster R-CNN and YOLO algorithms were used to automatically classify cactus diseases into six groups: 1) anthracnose, 2) canker, 3) lack of care, 4) aphid, 5) rusts and 6) normal. Based on the experimental results, the YOLOv5 algorithm was found to be more effective at detecting and identifying cactus disease than the Faster R-CNN algorithm. Data training and testing with the YOLOv5s model resulted in a precision of 89.7% and a recall of 98.5%, which is effective enough for further use in a number of applications in cactus cultivation. Overall, the YOLOv5 algorithm had a test time of only 26 milliseconds per image. Therefore, the YOLOv5 algorithm was found to be suitable for mobile applications, and this model could be further developed into a program for analyzing cactus disease.
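
    As a concrete illustration of the detection step, the following is a minimal sketch of single-image inference with a YOLOv5 model loaded through the Ultralytics torch.hub interface. The weights file cactus_yolov5s.pt, the image name, and the confidence threshold are assumptions; the authors' trained model is not reproduced here.

        # Minimal sketch of single-image inference with a custom-trained YOLOv5
        # model via the Ultralytics torch.hub interface. The weights file, image
        # name and class list are assumptions; the authors' model is not included.
        import torch

        CLASSES = ["anthracnose", "canker", "lack of care", "aphid", "rusts", "normal"]

        model = torch.hub.load("ultralytics/yolov5", "custom", path="cactus_yolov5s.pt")
        model.conf = 0.25  # confidence threshold for reported detections (assumed)

        # Detect disease regions in one photo and print them.
        results = model("cactus_photo.jpg")
        for *box, conf, cls in results.xyxy[0].tolist():
            print(f"{CLASSES[int(cls)]}: confidence {conf:.2f}, box {box}")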

    Factors effecting real-time optical monitoring of fused filament 3D printing

    Get PDF
    This study analyzes a low-cost, reliable, real-time optical monitoring platform for fused filament fabrication-based open-source 3D printing. An algorithm for reconstructing 3D images from overlapping 2D intensity measurements with relaxed camera positioning requirements is compared with a single-camera solution for single-side 3D printing monitoring. The algorithms are tested for different 3D object geometries and filament colors. The results showed that both the single- and double-camera algorithms were effective at detecting a clogged nozzle, an incomplete project, or loss of filament for a wide range of 3D object geometries and filament colors. The combined approach was the most effective and achieved a 100% detection rate for failures. The combined method analyzed here has a better detection rate and a lower cost than previous methods. In addition, this method is generalizable to a wide range of 3D printer geometries, which enables further deployment of desktop 3D printing, as wasted print time and filament are reduced, thereby improving the economic advantages of distributed manufacturing.
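
    One simple way to picture the single-side check described above is to compare the silhouette of the printed part seen by the camera against an expected silhouette derived from the model. The sketch below is a stand-in for the paper's actual error metric: the file names, Otsu thresholding, contour-area comparison, and 5% tolerance are all illustrative assumptions.

        # Minimal sketch of a single-side failure check: compare the printed
        # part's silhouette in a camera frame with an expected silhouette rendered
        # from the model. Thresholding, contour areas, the file names and the 5%
        # tolerance are illustrative assumptions, not the paper's error metric.
        import cv2

        ERROR_TOLERANCE = 0.05  # assumed: flag a failure beyond 5% size deviation

        def silhouette_area(gray):
            """Area (in pixels) of the largest bright region in a grayscale image."""
            _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            return max((cv2.contourArea(c) for c in contours), default=0.0)

        observed = silhouette_area(cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE))
        expected = silhouette_area(cv2.imread("rendered_model.png", cv2.IMREAD_GRAYSCALE))

        deviation = abs(observed - expected) / max(expected, 1.0)
        print("failure suspected" if deviation > ERROR_TOLERANCE else "print looks normal")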

    Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views

    Get PDF
    Prosumer (producing consumer)-based desktop additive manufacturing has been enabled by the recent radical reduction in 3-D printer capital costs created by the open-source release of the self-replicating rapid prototyper (RepRap). To continue this success, there have been some efforts to improve reliability, but they have either been too expensive or lacked automation. A promising method to improve reliability is computer vision, although success rates are still too low for widespread use. To overcome these challenges, an open-source, low-cost, reliable, real-time optical monitoring platform for 3-D printing based on two cameras is presented here. This error detection system is implemented with low-cost web cameras and covers 360 degrees around the printed object from three different perspectives. The algorithm is developed in Python and runs on a Raspberry Pi 3 mini-computer to reduce costs. For 3-D printing monitoring from three different perspectives, the system is tested with four different 3-D object geometries for normal operation and failure modes. The system is tested with two different techniques in the image pre-processing step: SIFT and RANSAC rescale and rectification, and non-rescale and rectification. The error calculations were determined from the horizontal and vertical magnitude methods applied to the 3-D reconstruction images. The non-rescale and rectification technique successfully detects the normal printing and failure states for all models with 100% accuracy, which is better than a single-camera setup alone. The computation time of the non-rescale and rectification technique is two times faster than that of the SIFT and RANSAC rescale and rectification technique.
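
    The per-pair error check implied by the horizontal and vertical magnitude comparison can be sketched as follows. This is not the paper's code: the reconstruct() helper is a stand-in (faked here with random points) for the stereo pipeline, and the expected dimensions and 5% tolerance are assumed values.

        # Minimal sketch of the per-pair size check suggested above: compare the
        # horizontal and vertical extent of the reconstructed points from each of
        # the three webcam pairs against the part's expected dimensions. The
        # reconstruct() helper, expected sizes and tolerance are assumptions.
        import numpy as np

        TOLERANCE = 0.05                     # assumed relative size tolerance
        EXPECTED_W, EXPECTED_H = 40.0, 25.0  # assumed part width/height in mm

        def size_error(points_3d):
            """Relative horizontal and vertical size error of reconstructed points."""
            width = points_3d[:, 0].max() - points_3d[:, 0].min()
            height = points_3d[:, 2].max() - points_3d[:, 2].min()
            return abs(width - EXPECTED_W) / EXPECTED_W, abs(height - EXPECTED_H) / EXPECTED_H

        def reconstruct(pair_id):
            # Stand-in for the stereo reconstruction of one camera pair; random
            # points are used here only so the check itself can be exercised.
            return np.random.uniform(0.0, 40.0, size=(500, 3))

        for pair_id in range(3):  # three webcam pairs spaced 120 degrees apart
            h_err, v_err = size_error(reconstruct(pair_id))
            status = "failure" if max(h_err, v_err) > TOLERANCE else "normal"
            print(f"pair {pair_id}: horizontal {h_err:.1%}, vertical {v_err:.1%} -> {status}")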

    Real-Time Eye State Detection System for Driver Drowsiness Using Convolutional Neural Network

    No full text
    One of the top reasons for road accidents that result in injuries and deaths is drivers dozing off. In this study, an eye-tracking system based on a novel open-source Internet of Things (IoT) platform has been developed. Three driver's-eye recognition algorithms were evaluated for integration into the open-source solution to wake drivers as they begin to doze off: 1) a Convolutional Neural Network (CNN) with Haar Cascade, 2) 68 facial landmark points, and 3) gaze detection, each in three different face positions for both day and night driving conditions as well as with and without glasses. Each combination of those factors was tested 100 times. The best algorithm was chosen based on the number of correct detections, and this algorithm was then tested again based on light (day and night), angle of face (left, right, and center), angle of camera (left and right), and glasses (on and off) to detect both blinking and closed eyes. The results show that the most accurate algorithm for detecting a driver's eyes is the CNN with Haar Cascade, with 94% accuracy. The system can detect the status of a driver's eyes while driving, and if the driver's eyes stay closed for longer than two seconds, it sounds an alert to wake the driver and avoid an accident. The proposed open-source system costs about US$100 and could be widely deployed to help reduce accidents on roads throughout the world.
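
    The closed-eye timing logic described above can be sketched with OpenCV's stock Haar cascades and a two-second timer. This is a hypothetical illustration, not the authors' implementation: the cascade files, camera index, and the rule that "no detected eyes means eyes closed" are simplifying assumptions, and the alert is a print statement standing in for the buzzer.

        # Minimal sketch of the closed-eye timing logic: stock Haar cascades find
        # the face and eyes in each frame, and an alert fires if no open eyes are
        # seen for more than two seconds. The cascades, camera index and the rule
        # "no detected eyes = eyes closed" are simplifying assumptions.
        import time
        import cv2

        face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

        ALERT_AFTER_S = 2.0        # eyes closed longer than this triggers the alert
        cap = cv2.VideoCapture(0)  # assumed in-cabin webcam on device index 0
        closed_since = None

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            eyes_open = False
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
                roi = gray[y:y + h, x:x + w]
                if len(eye_cascade.detectMultiScale(roi, 1.1, 10)) > 0:
                    eyes_open = True
            if eyes_open:
                closed_since = None
            elif closed_since is None:
                closed_since = time.time()
            elif time.time() - closed_since > ALERT_AFTER_S:
                print("ALERT: driver appears to be dozing off")  # stands in for the buzzer
                closed_since = None

        cap.release()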

    SEACHI 2022 Symposium: Bringing equality, justice, and access to HCI and UX agenda in Southeast Asia region

    Get PDF
    Southeast Asia, which consists of eleven countries, has been proud of its way of life and rich culture and is generally happy to maintain its long, comforting traditions. However, the region cannot deny that its diverse population and strategic location have made it a center of attention for global players investing in the region. With the emergence of Industry 4.0, digital transformation has become mandatory for organizations and nations in Southeast Asia to consider. Through the SEACHI (Southeast Asian CHI) Symposium, we aim to grow awareness of HCI and UX to improve the design and development of technology for living, and to bring together Southeast Asian academic researchers and industry practitioners. As HCI matures in Asia, we have identified the remarkable growth of, and need for, HCI in the Southeast Asian community. In this symposium, we have several questions that we would like to answer: to what extent has the HCI and UX taught and practiced in Southeast Asia met the needs of the digital transformation initiatives in the region; has there been any significant and proper contextualization of the HCI and UX fields; are HCI and UX still perceived as a Western mindset rather than a localized approach that can make a difference in projects; have HCI and UX become a standard norm in the digital product and design process; and how have HCI and UX players in Southeast Asia worked together to create a unique ecosystem? Under the overarching conference theme "Equity, Justice and Access Commitments," the symposium aims to bring about equal and fair access for anyone to exchange information and transfer knowledge across the multidisciplinary and multi-socioeconomic aspects of HCI and UX research and practice in Southeast Asia.